KMID : 1022420210130030071
Phonetics and Speech Sciences
2021, Volume 13, No. 3, pp. 71-78
Text-to-speech with linear spectrogram prediction for quality and speed improvement
Yoon Hye-Bin
Nam Ho-Sung
Abstract
Most neural-network-based speech synthesis models use a neural vocoder to convert mel-scaled spectrograms into high-quality, human-like voices. However, neural vocoders combined with mel-spectrogram prediction models demand considerable memory and time during training and suffer from slow inference in environments where a GPU is unavailable. Linear spectrogram prediction models avoid this problem because they do not use neural vocoders, but they produce lower voice quality. As a solution, this paper proposes a Tacotron 2- and Transformer-based linear spectrogram prediction model that produces high-quality speech without a neural vocoder. Experiments suggest that this model can serve as the foundation of a high-quality text-to-speech model with fast inference speed.
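For context, linear-spectrogram TTS models typically recover the waveform with the classical Griffin-Lim algorithm rather than a learned neural vocoder, which is why they avoid the vocoder's training and inference cost. The sketch below is a minimal NumPy/SciPy illustration of Griffin-Lim phase reconstruction; the parameters `n_fft`, `hop`, and `n_iter` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_fft=512, hop=128, n_iter=32, seed=0):
    """Reconstruct a waveform from a linear magnitude spectrogram by
    iteratively refining a phase estimate (Griffin & Lim, 1984).

    mag: array of shape (n_fft // 2 + 1, frames) with STFT magnitudes.
    Window/overlap choices here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    # Start from random phase on the unit circle.
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        # Inverse STFT with the current phase, then re-analyze the signal.
        _, wav = istft(mag * phase, nperseg=n_fft, noverlap=n_fft - hop)
        _, _, spec = stft(wav, nperseg=n_fft, noverlap=n_fft - hop)
        # Match the frame count of the target magnitude before updating phase.
        if spec.shape[1] < mag.shape[1]:
            spec = np.pad(spec, ((0, 0), (0, mag.shape[1] - spec.shape[1])))
        else:
            spec = spec[:, :mag.shape[1]]
        phase = np.exp(1j * np.angle(spec))
    # Final synthesis with the refined phase estimate.
    _, wav = istft(mag * phase, nperseg=n_fft, noverlap=n_fft - hop)
    return wav
```

Because each iteration requires only an STFT/ISTFT pair, this runs quickly on a CPU; the trade-off, as the abstract notes, is lower voice quality than a neural vocoder unless the prediction model compensates.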
KEYWORD
speech synthesis, machine learning, artificial intelligence, text-to-speech (TTS)
Listed journal: Korea Citation Index (KCI)